FACEBOOK is now determining if

you are a “fag” just by your photo.


Remember how Obama and Hillary were huge on transgender bathrooms for mentally ill men who want to cut their penises off? It turns out only a microscopic number of Americans are into that and they mostly work at Google, Facebook and in Hollywood.


Facebook’s AI photo analysis software can now find out if you are a “butt surfer” and try to get you to join the DNC! The American Democratic Party likes to be known as the party-of-stick-it-in-anywhere and now Facebook can hunt-down all of the homosexuals just by scanning all of the selfies on the internet! Gays look faggy according to expert Mark Zuckerberg, who has sold hundreds of millions of dollars of face scanning services to the CIA, NSA and the DNC!


Row over AI that 'identifies gay faces'

Image copyright: Stanford University
Image caption: The study created composite faces judged most and least likely to belong to homosexuals

A facial recognition experiment that claims to be able to distinguish between gay and heterosexual people has sparked a row between its creators and two leading LGBT rights groups.

The Stanford University study claims its software recognises facial features relating to sexual orientation that are not perceived by human observers.

The work has been accused of being "dangerous" and "junk science".

But the scientists involved say these are "knee-jerk" reactions.

Details of the peer-reviewed project are due to be published in the Journal of Personality and Social Psychology.

Narrow jaws

For their study, the researchers trained an algorithm using the photos of more than 14,000 white Americans taken from a dating website.

They used between one and five of each person's pictures and took people's sexuality as self-reported on the dating site.

The researchers said the resulting software appeared to be able to distinguish between gay and heterosexual men and women.

In one test, when the algorithm was presented with two photos where one picture was definitely of a gay man and the other heterosexual, it was able to determine which was which 81% of the time.

With women, the figure was 71%.
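For context, figures like these describe a forced-choice test rather than accuracy over single, unlabelled photos: the software is shown one photo from each group and asked which is which. The short Python sketch below, which uses made-up scores rather than anything from the study, shows how that kind of pairwise accuracy is computed; it is mathematically equivalent to the AUC statistic commonly reported for classifiers.

import random

def pairwise_accuracy(pos_scores, neg_scores):
    # Fraction of (positive, negative) pairs in which the positive example
    # receives the higher score; ties count as half a win. This equals the AUC.
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Purely illustrative synthetic scores; the study's actual outputs are not reproduced here.
random.seed(0)
group_a = [random.gauss(1.0, 1.0) for _ in range(500)]
group_b = [random.gauss(0.0, 1.0) for _ in range(500)]
print(f"pairwise (forced-choice) accuracy: {pairwise_accuracy(group_a, group_b):.2f}")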

"Gay faces tended to be gender atypical," the researchers said. "Gay men had narrower jaws and longer noses, while lesbians had larger jaws."

But their software did not perform as well in other situations, including a test in which it was given photos of 70 gay men and 930 heterosexual men.

When asked to pick 100 men "most likely to be gay" it missed 23 of them.
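Put another way, this was a precision-at-100 test: rank all 1,000 men by the software's score, keep the 100 highest-scoring, and count how many of the 70 actually appear in that list. If "missed 23" means 23 of the 70 were left out of the selection, then 47 true positives landed in the top 100, a precision of 47%. The sketch below walks through that arithmetic with invented scores, purely to illustrate the metric, not the study's data.

import random

random.seed(1)
# Hypothetical scores for 70 positives and 930 negatives (synthetic stand-ins).
positives = [("pos", random.gauss(1.2, 1.0)) for _ in range(70)]
negatives = [("neg", random.gauss(0.0, 1.0)) for _ in range(930)]

# Rank everyone by score and take the 100 highest-scoring examples.
ranked = sorted(positives + negatives, key=lambda item: item[1], reverse=True)
hits = sum(1 for label, _ in ranked[:100] if label == "pos")

print(f"true positives in the top 100: {hits} of 70 (missed {70 - hits})")
print(f"precision@100: {hits / 100:.2f}")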

In its summary of the study, the Economist - which was first to report the research - pointed to several "limitations" including a concentration on white Americans and the use of dating site pictures, which were "likely to be particularly revealing of sexual orientation".

'Reckless findings'

On Friday, two US-based LGBT-focused civil rights groups issued a joint press release attacking the study in harsh terms.

"This research isn't science or news, but it's a description of beauty standards on dating sites that ignores huge segments of the LGBTQ (lesbian, gay, bisexual, transgender and queer/questioning) community, including people of colour, transgender people, older individuals, and other LGBTQ people who don't want to post photos on dating sites," said Jim Halloran, chief digital officer of Glaad, a media-monitoring body.

"These reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous."

Image caption: Campaigners raised concerns about what would happen if surveillance tech tried to make use of the study

The Human Rights Campaign added that it had warned the university of its concerns months ago.

"Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world - and this case, millions of people's lives - worse and less safe than before," said its director of research, Ashland Johnson.

The two researchers involved - Prof Michal Kosinski and Yilun Wang - have since responded in turn, accusing their critics of "premature judgement".

"Our findings could be wrong... however, scientific findings can only be debunked by scientific data and replication, not by well-meaning lawyers and communication officers lacking scientific training," they wrote.

"However, if our results are correct, Glaad and HRC representatives' knee-jerk dismissal of the scientific findings puts at risk the very people for whom their organisations strive to advocate."

'Treat cautiously'

Previous research that linked facial features to personality traits has become unstuck when follow-up studies failed to replicate the findings. This includes the claim that a face's shape could be linked to aggression.

One independent expert, who spoke to the BBC, said he had additional concerns about the claim that the software involved in the latest study picked up on "subtle" features shaped by hormones the subjects had been exposed to in the womb.

"These 'subtle' differences could be a consequence of gay and straight people choosing to portray themselves in systematically different ways, rather than differences in facial appearance itself," said Prof Benedict Jones, who runs the Face Research Lab at the University of Glasgow.

It was also important, he said, for the technical details of the analysis algorithm to be published to see if they stood up to informed criticism.

"New discoveries need to be treated cautiously until the wider scientific community - and public - have had an opportunity to assess and digest their strengths and weaknesses," he said.


New AI can guess whether you're gay or straight from a photograph

An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

An illustrated depiction of facial analysis technology similar to that used in the experiment. Illustration: Alamy

Sam Levin in San Francisco

@SamTLevin




Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
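The jump from single-photo to five-photo accuracy is what one would expect from averaging several noisy estimates of the same person. The toy simulation below uses entirely synthetic numbers, not the study's data or method, to illustrate the general statistical effect: pooling five noisy per-image scores separates two groups more cleanly than any single image does.

import random

random.seed(2)

def simulated_accuracy(images_per_person, trials=2000):
    # Each simulated person has an underlying signal of +0.5 or -0.5 (an invented value);
    # every photo of that person yields that signal plus unit Gaussian noise.
    correct = 0
    for _ in range(trials):
        true_label = random.choice([0, 1])
        signal = 0.5 if true_label == 1 else -0.5
        scores = [random.gauss(signal, 1.0) for _ in range(images_per_person)]
        # Classify by the sign of the average score across the person's photos.
        predicted = 1 if sum(scores) / images_per_person > 0 else 0
        correct += predicted == true_label
    return correct / trials

for n in (1, 5):
    print(f"images per person: {n}  simulated accuracy: {simulated_accuracy(n):.2f}")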


The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”

Kosinski was not available for an interview, according to a Stanford spokesperson. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”

Contact the author: sam.levin@theguardian.com


Face-reading AI will be able to detect your politics and IQ, professor says

Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

 Your photo could soon reveal your political views, says a Stanford professor. Photograph: Frank Baron for the Guardian


Sam Levin in San Francisco

@SamTLevin


Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.


Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

 

Michal Kosinski. Photograph: Lauren Bamford

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”

Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even with AI that makes highly accurate predictions, there will still be a percentage of predictions that are incorrect.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”